Since early 2020, the COVID-19 pandemic has had a considerable impact on many aspects of daily life. A range of different measures has been adopted worldwide to reduce the rate of new infections and to manage the pressure on national health services. The primary strategy has been to reduce gatherings, and the potential for transmission, by prioritizing remote working and education. When gatherings are unavoidable, enhanced hand hygiene and the use of face masks reduce the spread of pathogens. These particular measures present challenges for reliable biometric recognition, e.g., for facial, voice, and hand-based biometrics. At the same time, the new challenges have created new opportunities and research directions, such as renewed interest in unconstrained iris or periocular recognition, touchless fingerprint- and vein-based authentication, and the use of biometric characteristics for disease detection. This article presents an overview of the research conducted to address these challenges and emerging opportunities.
Data-centric artificial intelligence (data-centric AI) represents an emerging paradigm emphasizing that the systematic design and engineering of data is essential for building effective and efficient AI-based systems. The objective of this article is to introduce practitioners and researchers from the field of Information Systems (IS) to data-centric AI. We define relevant terms, provide key characteristics to contrast the data-centric paradigm to the model-centric one, and introduce a framework for data-centric AI. We distinguish data-centric AI from related concepts and discuss its longer-term implications for the IS community.
For improving short-length codes, we demonstrate that classic decoders can also be used with real-valued, neural encoders, i.e., deep-learning based codeword sequence generators. Here, the classical decoder can be a valuable tool to gain insights into these neural codes and shed light on weaknesses. Specifically, the turbo-autoencoder is a recently developed channel coding scheme where both encoder and decoder are replaced by neural networks. We first show that the limited receptive field of convolutional neural network (CNN)-based codes enables the application of the BCJR algorithm to optimally decode them with feasible computational complexity. These maximum a posteriori (MAP) component decoders are then used to form classical (iterative) turbo decoders for parallel or serially concatenated CNN encoders, offering close-to-maximum-likelihood (ML) decoding of the learned codes. To the best of our knowledge, this is the first time that a classical decoding algorithm has been applied to a non-trivial, real-valued neural code. Furthermore, as the BCJR algorithm is fully differentiable, it is possible to train, or fine-tune, the neural encoder in an end-to-end fashion.
Finding the optimal message quantization is a key requirement for low-complexity belief propagation (BP) decoding. To this end, we propose a floating-point surrogate model that imitates quantization effects as additions of uniform noise, whose amplitudes are trainable variables. We verify that the surrogate model closely matches the behavior of a fixed-point implementation, and propose a hand-crafted loss function to realize a trade-off between complexity and error-rate performance. A deep-learning-based method is then applied to optimize the message bitwidths. Moreover, we show that parameter sharing can both ensure implementation-friendly solutions and lead to faster training convergence than independent parameters. We provide simulation results for 5G low-density parity-check (LDPC) codes and report an error-rate performance within 0.2 dB of floating-point decoding at an average message quantization bitwidth below 3.1 bits. In addition, we show that the learned bitwidths also generalize to other code rates and channels.
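The core idea of the surrogate model above can be illustrated with a minimal numpy sketch (all names and the step size below are illustrative assumptions, not the paper's implementation): a uniform quantizer with step size delta produces an error bounded by delta/2, and the differentiable surrogate replaces it with additive uniform noise of the same amplitude, so the two error processes share the same bound and variance.

```python
import numpy as np

def quantize(x, delta):
    """Uniform mid-tread quantizer with step size delta (fixed-point model)."""
    return delta * np.round(x / delta)

def surrogate(x, delta, rng):
    """Differentiable surrogate: quantization imitated as additive uniform
    noise with amplitude delta/2 (the amplitude is the trainable quantity
    in the scheme described above)."""
    return x + rng.uniform(-delta / 2, delta / 2, size=x.shape)

rng = np.random.default_rng(0)
msgs = rng.normal(0.0, 2.0, size=100_000)  # stand-in for BP messages
delta = 0.25                               # hypothetical step size

q_err = quantize(msgs, delta) - msgs       # true quantization error
s_err = surrogate(msgs, delta, rng) - msgs # surrogate noise

# both errors are bounded by delta/2; both variances approach delta^2 / 12
print(q_err.var(), s_err.var(), delta**2 / 12)
```

Because the surrogate is just an additive-noise term, gradients flow through it unchanged, which is what makes the bitwidths trainable in the floating-point model.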
With the growing availability of data in various scientific domains, generative models hold enormous potential to accelerate scientific discovery at every step of the scientific method. Perhaps their most valuable application lies in the traditionally slowest and most challenging step: coming up with a hypothesis. Powerful representations are now being learned from large volumes of data to generate novel hypotheses, which is having a significant impact on scientific-discovery applications ranging from material design to drug discovery. GT4SD (https://github.com/gt4sd/gt4sd-core) is an extensible open-source library that enables scientists, developers, and researchers to train and use state-of-the-art generative models for hypothesis generation in scientific discovery. GT4SD supports the use of a variety of generative models across material science and drug discovery, including molecule discovery and design based on properties such as relatedness to a target protein, omic profiles, scaffold distances, and binding energies.
We study iterative methods for (two-stage) robust combinatorial optimization problems with discrete uncertainty. We propose a machine-learning-based heuristic to determine starting scenarios that provide strong lower bounds. To this end, we design dimension-independent features and train a Random Forest Classifier on small-dimensional instances. Experiments show that our method improves the solution process for instances larger than those contained in the training set and also provides a feature-importance score that gives insights into the role of scenario properties.
Applying machine learning (ML) to sensitive domains requires privacy protection of the underlying training data through formal privacy frameworks, such as differential privacy (DP). Yet, usually, the privacy of the training data comes at the cost of the resulting ML models' utility. One reason for this is that DP uses one uniform privacy budget epsilon for all training data points, which has to align with the strictest privacy requirement encountered among all data holders. In practice, different data holders have different privacy requirements and data points of data holders with lower requirements can contribute more information to the training process of the ML models. To account for this need, we propose two novel methods based on the Private Aggregation of Teacher Ensembles (PATE) framework to support the training of ML models with individualized privacy guarantees. We formally describe the methods, provide a theoretical analysis of their privacy bounds, and experimentally evaluate their effect on the final model's utility using the MNIST, SVHN, and Adult income datasets. Our empirical results show that the individualized privacy methods yield ML models of higher accuracy than the non-individualized baseline. Thereby, we improve the privacy-utility trade-off in scenarios in which different data holders consent to contribute their sensitive data at different individual privacy levels.
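The PATE framework that the two proposed methods build on can be sketched in a few lines (this shows only the common, non-individualized baseline aggregation; the teacher count, class count, and epsilon below are illustrative assumptions): each teacher, trained on a disjoint data partition, votes on a query, Laplace noise calibrated to the privacy budget is added to the vote histogram, and the noisy argmax is released as the label.

```python
import numpy as np

def pate_aggregate(teacher_preds, num_classes, epsilon, rng):
    """Standard PATE noisy-max aggregation: build the vote histogram over
    teacher predictions, add Laplace noise with scale 1/epsilon, and release
    the noisy argmax. (The individualized variants described above modify
    this scheme; this is only the uniform-budget baseline.)"""
    votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    noisy_votes = votes + rng.laplace(0.0, 1.0 / epsilon, size=num_classes)
    return int(np.argmax(noisy_votes))

rng = np.random.default_rng(1)
# hypothetical votes of 250 teachers on one query over 10 classes:
# 200 teachers agree on class 3, the rest scatter
teacher_preds = np.concatenate([np.full(200, 3), rng.integers(0, 10, 50)])
label = pate_aggregate(teacher_preds, num_classes=10, epsilon=0.5, rng=rng)
print(label)
```

When the teachers largely agree, the vote gap dwarfs the Laplace noise and the released label matches the consensus; the privacy cost per query shrinks as epsilon grows, which is exactly the lever that per-data-holder budgets would tune.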
Despite significant progress of generative models in the natural sciences, their controllability remains challenging. One fundamentally missing aspect of molecular or protein generative models is an inductive bias that can reflect continuous properties of interest. To that end, we propose the Regression Transformer (RT), a novel method that abstracts regression as a conditional sequence modeling problem. This introduces a new paradigm of multitask language models which seamlessly bridge sequence regression and conditional sequence generation. We thoroughly demonstrate that, despite using a nominal-scale training objective, the RT matches or surpasses the performance of conventional regression models in property prediction tasks of small molecules, proteins and chemical reactions. Critically, priming the same model with continuous properties yields a highly competitive conditional generative model that outperforms specialized approaches in a substructure-constrained, property-driven molecule generation benchmark. Our dichotomous approach is facilitated by a novel, alternating training scheme that enables the model to decorate seed sequences by desired properties, e.g., to optimize reaction yield. In sum, the RT is the first report of a multitask model that concurrently excels at predictive and generative tasks in biochemistry. This finds particular application in property-driven, local exploration of the chemical or protein space and could pave the road toward foundation models in material design. The code to reproduce all experiments of the paper is available at: https://github.com/IBM/regression-transformer
Target tracking in radiation therapy using 3D ultrasound (3DUS) has become more interesting due to its ability to provide volumetric images in real time without the use of ionizing radiation, potentially enabling tracking without the use of fiducials. To this end, methods for learning meaningful representations are useful for recognizing anatomical structures across different time frames in a representation space (R-space). In this study, 3DUS patches are reduced into a 128-dimensional R-space using a conventional autoencoder, a variational autoencoder, and a sliced-Wasserstein autoencoder. In the R-space, the ability to separate different ultrasound patches as well as to identify similar patches is compared on a dataset of liver images. Two metrics to evaluate the tracking capability in the R-space are proposed. It is shown that ultrasound patches with different anatomical structures can be distinguished and that similar patches can be clustered in the R-space. The results indicate that the investigated autoencoders have different levels of usability for target tracking in 3DUS.
Boosting is one of the most significant developments in machine learning. This paper studies the rate of convergence of $L_2$Boosting, tailored to the high-dimensional setting. Moreover, we introduce so-called "post-Boosting". This is a post-selection estimator that applies ordinary least squares to the variables selected in the first stage by $L_2$Boosting. Another variant is "Orthogonal Boosting", where an orthogonal projection is conducted after each step. We show that both post-$L_2$Boosting and Orthogonal Boosting achieve the same rate of convergence as the Lasso in a sparse, high-dimensional setting. We show that the rate of convergence of classical $L_2$Boosting depends on the design matrix, as described by a sparse-eigenvalue constant. To show the latter result, we derive new approximation results for the pure greedy algorithm, based on analyzing the revisiting behavior of $L_2$Boosting. We also introduce feasible early-stopping rules that are easy to implement and use in applications. Our results also allow a direct comparison between the Lasso and Boosting, which has been missing from the literature. Finally, we present simulation studies and applications to illustrate the relevance of our theoretical results and to provide insights into the practical aspects of Boosting. In these simulation studies, $L_2$Boosting clearly outperforms the Lasso.
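The two estimators described above can be sketched in a few lines of numpy (a minimal illustration on synthetic data; the learning rate, step count, and problem sizes are illustrative assumptions, not the paper's settings): componentwise $L_2$Boosting is the pure greedy algorithm that repeatedly fits the covariate most correlated with the current residual, and post-Boosting refits OLS on the selected support.

```python
import numpy as np

def l2_boost(X, y, steps, nu=0.1):
    """Componentwise L2-Boosting (pure greedy algorithm): at each step, select
    the covariate most correlated with the residual and take a shrunken
    least-squares step nu along it."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.astype(float).copy()
    for _ in range(steps):
        corr = X.T @ resid
        j = np.argmax(np.abs(corr))            # greedy coordinate choice
        step = corr[j] / (X[:, j] @ X[:, j])   # univariate OLS coefficient
        beta[j] += nu * step
        resid -= nu * step * X[:, j]
    return beta

def post_boost(X, y, beta):
    """Post-Boosting: ordinary least squares refit on the support selected
    by L2-Boosting in the first stage."""
    S = np.flatnonzero(beta)
    beta_post = np.zeros_like(beta)
    beta_post[S] = np.linalg.lstsq(X[:, S], y, rcond=None)[0]
    return beta_post

rng = np.random.default_rng(2)
n, p = 200, 50                                 # sparse toy design
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]               # three active covariates
y = X @ beta_true + 0.1 * rng.normal(size=n)

beta_hat = l2_boost(X, y, steps=200)
beta_refit = post_boost(X, y, beta_hat)
```

The refit step removes the shrinkage bias that the small learning rate nu leaves in the boosted coefficients, which is the intuition behind post-Boosting matching the Lasso rate.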